APPLICATION OF COMPUTER VISION TECHNOLOGIES FOR AUTONOMOUS PILE MANIPULATION
Authors
Abstract
Similar Resources
Perceiving, Learning, and Exploiting Object Affordances for Autonomous Pile Manipulation
Autonomous manipulation in unstructured environments presents roboticists with three fundamental challenges: object segmentation, action selection, and motion generation. These challenges become more pronounced when unknown manmade or natural objects are cluttered together in a pile. We present an end-to-end approach to the problem of manipulating unknown objects in a pile, with the objective o...
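To make the three stages concrete, the following minimal Python sketch runs a segment-select-move pipeline on a toy overhead depth map. It is only an illustration under assumed thresholds, an assumed pixel-to-metre scale, and invented helper names (segment_pile, select_target, generate_motion); it is not the affordance-learning method the paper describes.

```python
"""Illustrative segment -> select -> move sketch for pile manipulation.
NOT the paper's affordance-learning method; the thresholds, helper names,
and toy depth image are assumptions for clarity."""
import numpy as np
from scipy import ndimage

def segment_pile(depth, table_height=0.02):
    """Label connected regions that rise above the table plane."""
    mask = depth > table_height                 # anything taller than the table
    labels, num = ndimage.label(mask)           # connected-component segmentation
    return labels, num

def select_target(depth, labels, num):
    """Pick the segment with the greatest mean height (most exposed object)."""
    heights = [depth[labels == i].mean() for i in range(1, num + 1)]
    return int(np.argmax(heights)) + 1

def generate_motion(depth, labels, target, start=np.array([0.0, 0.0, 0.3]), steps=20):
    """Straight-line Cartesian approach to the target segment's centroid."""
    ys, xs = np.nonzero(labels == target)
    goal = np.array([xs.mean() * 0.01, ys.mean() * 0.01,     # pixel -> metres (assumed scale)
                     depth[labels == target].max() + 0.02])  # hover 2 cm above the object
    return np.linspace(start, goal, steps)                   # list of waypoints

# Toy overhead depth map: two "objects" lying on a flat table.
depth = np.zeros((40, 40))
depth[5:15, 5:15] = 0.05
depth[20:35, 18:30] = 0.09
labels, num = segment_pile(depth)
target = select_target(depth, labels, num)
trajectory = generate_motion(depth, labels, target)
print(f"{num} segments, approaching segment {target} with {len(trajectory)} waypoints")
```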
CAD-Based Robot Vision for Autonomous Manipulation
This paper presents some features of a vision system for autonomous manipulation that we are developing in the framework of a project concerning unmanned space missions. The main goal is accurate estimation of the robot pose by 3D scene reconstruction and matching against an a priori CAD model of the environment. Results of some experimental tests on a “blocks world” are presented. These result...
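As a rough illustration of estimating pose by matching sensed geometry against an a priori model, the sketch below aligns a synthetic point cloud to a model cloud with a basic point-to-point ICP loop. This is a generic stand-in, not the paper's CAD-matching pipeline; the synthetic data, rotation angle, iteration count, and helper names are assumptions.

```python
"""Minimal ICP-style alignment of a sensed cloud to an a-priori model cloud,
as a stand-in for CAD-based pose estimation. Data and parameters are assumed."""
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

def icp(scene, model, iters=30):
    """Repeatedly match scene points to nearest model points and re-fit the pose."""
    tree = cKDTree(model)
    R_total, t_total = np.eye(3), np.zeros(3)
    current = scene.copy()
    for _ in range(iters):
        _, idx = tree.query(current)            # closest model point per scene point
        R, t = best_rigid_transform(current, model[idx])
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic "blocks world" observation: the model cloud rotated and shifted.
rng = np.random.default_rng(0)
model = rng.random((500, 3))
a = np.deg2rad(10)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
scene = model @ R_true.T + np.array([0.05, -0.02, 0.10])

R_est, t_est = icp(scene, model)
aligned = scene @ R_est.T + t_est
print("mean alignment error:", float(np.linalg.norm(aligned - model, axis=1).mean()))
```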
Robot Vision Architecture for Autonomous Clothes Manipulation
This paper presents a novel robot vision architecture for perceiving generic 3D clothes configurations. Our architecture is hierarchically structured, progressing from low-level curvature features, through mid-level geometric shape and topology descriptions, to high-level semantic surface structure descriptions. We demonstrate our robot vision architecture in a customised dual-arm indus...
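The layered idea can be mimicked on toy data: the sketch below derives a low-level curvature map from a synthetic depth image, summarises it into a mid-level geometric statistic, and maps that statistic to a coarse high-level label. The thresholds and the flat/wrinkled vocabulary are assumptions for illustration only, not the paper's descriptors.

```python
"""Toy low/mid/high-level perception hierarchy on a depth map, loosely
mirroring a curvature -> geometry -> semantics layering. Thresholds and
labels are illustrative assumptions."""
import numpy as np

def low_level_curvature(depth):
    """Approximate surface curvature via the Laplacian of the depth map."""
    dzdy, dzdx = np.gradient(depth)
    d2y, _ = np.gradient(dzdy)
    _, d2x = np.gradient(dzdx)
    return d2x + d2y

def mid_level_geometry(curvature, thresh=1e-4):
    """Summarise geometry: fraction of the surface that is strongly curved."""
    return float(np.mean(np.abs(curvature) > thresh))

def high_level_semantics(curved_fraction):
    """Map the geometric summary to a coarse semantic surface description."""
    return "wrinkled/folded region" if curved_fraction > 0.1 else "flat region"

# Synthetic depth map: a flat cloth with a sinusoidal wrinkle on the right half.
x = np.linspace(0, 4 * np.pi, 100)
depth = np.zeros((100, 100))
depth[:, 50:] = 0.01 * np.sin(x[:50])[None, :]   # wrinkles only on one side

curv = low_level_curvature(depth)
frac = mid_level_geometry(curv)
print(high_level_semantics(frac), f"(curved fraction: {frac:.2f})")
```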
Computer Vision for Assistive Technologies
In recent decades there has been a tremendous increase in demand for Assistive Technologies (AT), which help individuals overcome functional limitations and improve their quality of life. As a consequence, a variety of research papers addressing the development of assistive technologies have appeared in the literature, raising the need to organize and categorize them taking into account the...
Autonomous Vision-Guided Robot Manipulation Control
This paper presents a hand-eye-head system which learns to perform temporal actions. In this system, the robot learns how to control its hand according to what is seen and the specific mission. The learning process is interactive and online. Several networks that store the learned information are automatically generated through the learning process. The hierarchical structure of the network allo...
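As a much simpler stand-in for vision-guided hand control, the sketch below closes a classical proportional visual-servoing loop rather than the learned networks described in the paper; the pretend camera, gain, and target values are assumed for illustration.

```python
"""Generic hand-eye control loop: a proportional law drives the end-effector
until the observed feature reaches the image target. This is a classical
visual-servoing stand-in, not the paper's learned-network approach."""
import numpy as np

def observe(hand_xy, noise=0.0):
    """Pretend camera: the image feature is the hand position plus optional noise."""
    return hand_xy + noise * np.random.randn(2)

def control_step(feature, target, gain=0.2):
    """Proportional law: move a fraction of the image-space error each step."""
    return gain * (target - feature)

hand = np.array([0.0, 0.0])
target = np.array([0.4, -0.3])                  # desired feature location (assumed)
for step in range(40):
    feature = observe(hand)
    if np.linalg.norm(target - feature) < 1e-3:
        break
    hand = hand + control_step(feature, target)
print(f"stopped after {step} steps, final error {np.linalg.norm(target - hand):.4f}")
```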
Journal
Journal title: ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference
Year: 2019
ISSN: 2256-070X, 1691-5402
DOI: 10.17770/etr2019vol2.4033